Adam Pultz Melbye
The following interview was conducted by Jeff Brown and published in the German edition of Van Magazine. The magazine has kindly permitted me to post the English version here.

1. Has your work on this Musikfonds project with AI led you to make concrete changes to the way the feedback-actuated augmented bass works? Or are you keeping the original FAAB setup and dealing with AI on more of a conceptual level?

Since 2020, I’ve had the idea of installing motorized tuners on the FAAB but was overwhelmed by the technical difficulties of installing 20th-century technology on an instrument developed in the 17th century. When I started working with the STIP-4 grant, I realized that I would finally have the time and financial stability to develop this idea. Eventually, I decided not to implement the motorized tuning mechanisms on the FAAB itself, but to build a new instrument that could serve as an extension of the FAAB as well as perform in its own right. Hence, the Spectral Parrot was born. In its most basic form, the Parrot is a wooden board with eight strings, the tuning mechanisms of which are coupled to motors that are in turn controlled via a laptop. Similar to the FAAB, the Parrot’s strings are amplified through the instrument’s own body, meaning that when the amplification level gets high enough, the strings will go into self-oscillation, also known as feedback. As such, I don’t need to physically play the instrument—it will sound on its own. Where AI enters the picture is in how the motors are controlled, and hence how the instrument is tuned, de-tuned, or re-tuned (depending on one’s perspective) during performance. The name The Spectral Parrot alludes to the fact that the instrument attempts to mimic audio: it learns to match other sound sources, such as recordings or instruments played in real time, for example the FAAB. By using a machine learning architecture called deep reinforcement learning, it slowly learns to adjust the motors to tune its eight strings in a way that approximates the sound it is trying to match. So while the FAAB itself hasn’t changed much during the stipend, it has entered into conversations with a new interlocutor.
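The loop described above, observing a target sound, nudging the motors in small increments, and keeping the adjustments that bring the strings closer, can be sketched in miniature. The Python toy below is not the Parrot’s actual deep reinforcement learning system; it is a much simpler reward-guided search, and the string count aside, the pitch values, step size, and mismatch measure are all invented for illustration.

```python
import random

# Toy stand-in for the Parrot's learning loop. All numbers here are
# invented for illustration, not taken from the actual instrument.
N_STRINGS = 8
STEP = 0.5  # hypothetical pitch change (Hz) per incremental motor step

def mismatch(current, target):
    # Crude stand-in for spectral distance: total absolute pitch error.
    # (The real system compares sound, not lists of fundamental pitches.)
    return sum(abs(c - t) for c, t in zip(current, target))

def tune(start, target, episodes=10000, seed=1):
    """Reward-guided search: try one small motor step at a time, keep it
    if the mismatch drops (positive reward), revert it otherwise."""
    rng = random.Random(seed)
    pitches = list(start)
    best = mismatch(pitches, target)
    for _ in range(episodes):
        i = rng.randrange(N_STRINGS)        # pick one string at random
        delta = rng.choice([-STEP, STEP])   # small incremental tuner step
        pitches[i] += delta
        reward = best - mismatch(pitches, target)
        if reward > 0:
            best -= reward                  # keep the improving move
        else:
            pitches[i] -= delta             # undo the harmful move
    return pitches, best

# Hypothetical target: eight pitches to imitate, starting from a flat tuning.
target = [65.4, 73.4, 82.4, 98.0, 110.0, 130.8, 146.8, 196.0]
start = [60.0] * N_STRINGS
final, err = tune(start, target)
```

Even in this toy, the mismatch falls only over thousands of tiny steps, echoing the slow, incremental character of the learning described above; a real deep reinforcement learning agent would instead train a neural network policy on audio features, and would additionally have to cope with strings influencing one another through feedback.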

2. What is it like playing on a bass that re- and detunes as you're playing it? How has that affected the way you approach the instrument?

Imagine you are presented with a sound and have to tune eight strings to match it. Even for an expert musician, this would be a difficult task, and you would have to turn the tuners fairly slowly and in incremental steps to make sure you don’t overshoot. The Parrot faces similar challenges, and since it is a feedback instrument, its task is further complicated by the fact that slightly changing the tension of one string may cause the other strings to change their response as well. Its learning process is therefore defined by slowly changing timbres and lots of exploration that, especially in the early learning stages, rarely manages to approach the sound of the FAAB. However, rather than seeing this as a shortcoming of the instrument, I perceive its tuning idiosyncrasies as a natural part of learning. In fact, the more I have worked with machine learning, the less interested I have become in AI as an optimization tool. All of the AIs we interact with on a daily basis, and many of the machine learning systems used in the arts, have been trained prior to being released into the world. At least from an engineering perspective, this is understandable, as you don’t want ChatGPT to spit out nonsense for months until you have taught it to string a sentence together. In this project, I am more interested in performing with a system while it is learning than in performing with an AI that already knows what it is doing.

3. How have you approached relinquishing agency as a performer on stage? Is that ever a frightening feeling? And what does that relinquishing of agency look like in the context of this AI-focused project?

I think there’s a difference between relinquishing full control and relinquishing agency. In classical music, as well as the institutionalized jazz tradition I was originally trained in, there’s a focus on instrumental proficiency combined with a fetishization of the individual performer. This focus can overshadow music-making as a social practice in which human and possibly non-human agents co-create the music. I think a similar kind of essentially romantic but also neoliberal individualism has shaped how we think about agency in the first place. I don’t think that agency can be located in any single performer, or more generally in any individual person; rather, it is an emergent property of interaction. One of the things I’ve been exploring extensively with the FAAB is how to approach human-machine interaction as a shared kind of agency. But more and more, I tend to shy away from the term agency, as I find it often gets used as a kind of fairy dust we sprinkle on things we can’t fully grasp in order to render them ontologically vague. But since I don’t have a suggestion for a better term, let’s stick with agency for now: yes, it can be terrifying when things don’t go as you expect them to, even if you build your instrument to exhibit its own autonomy, as I have tried to do with the FAAB and the Parrot. I am still trying to learn how to creatively negotiate truly unexpected behaviours that transcend the nonlinear sound-processing algorithms I have coded myself. For example, when I played my first Berlin concert with the FAAB, I encountered a periodic digital glitch that completely threw me off. As an exercise in working through this trauma, I compiled a video of every single glitch in that performance:

In the context of the Parrot, I can define the parameters for its learning, but once it actually starts training alongside me playing the FAAB, I don’t have control over what happens. During performance, then, making sense of the Parrot’s sometimes perplexing choices and explorations involves adapting my own playing, which is not unlike how I would approach performing alongside a human improviser.

4. In concrete terms, how is your new work “critically assessing the promises versus the realities of machine learning as a supposedly autonomous tool in artistic practice and beyond”? How do you think about that gap in the context of making art?

The Spectral Parrot is named after the paper “On the Dangers of Stochastic Parrots” (Bender, Gebru, McMillan-Major, and Shmargaret Shmitchell), which critically reflects on the kind of machine learning that has become known as Large Language Models (LLMs), of which ChatGPT is one. Perhaps the most astonishing feat of modern machine learning is how major tech companies have created the narrative that AI can save us from work and heralds a new age of affluence. After all, their models are trained on our collective knowledge, their software is scaffolded by millions of hours of building and testing, and it is being indiscriminately employed for military purposes, such as in the genocide in Gaza. It should come as no surprise that AI in many cases perpetuates the injustice and violence of capitalism and colonialism—in this regard, it is no different from any other technology. But I think that we, as artists, are in a position to challenge the techno-optimist narratives of neoliberalism. While commercial LLMs are trained at high speed in enormous server farms, the Spectral Parrot is bounded by its physical constitution and learns in real time, which is the time it takes for audio to play and motors to turn. This de-acceleration may at first seem underwhelming—after all, it currently takes the instrument an estimated 8-10 hours to learn to approximate a single sound. Yet that also means that we, as humans, gain some access to the learning processes that shape the system, while I, as a performer, can interact with the instrument as it learns. Since I also shape my playing to the Parrot’s slow learning, the learning is in fact a shared process in which the supposed objective for the Parrot—to learn the sound of what I play—becomes a moving goalpost.
While I am attracted to the creative potentials of this kind of de-acceleration, I also think that de-accelerated artistic and speculative approaches to machine learning have a subversive potential in the multidimensional struggle to reclaim technology as benefiting people rather than big tech and state surveillance.